The presence of White students and the emergence of Black-White within-school inequalities: two interaction-based mechanisms
This article investigates mechanism-based explanations for a well-known empirical pattern in the sociology of education, namely, that Black-White unequal access to school resources -- defined here as advanced coursework -- is greatest in racially diverse and majority-White schools. Through an empirically calibrated and validated agent-based model, this study explores the dynamics of two qualitatively informed mechanisms, showing (1) that there is reason to believe the presence of White students in a school can influence the emergence of Black-White advanced enrollment disparities and (2) that such influence offers another possible explanation for the macro-level pattern of interest. Results contribute to current scholarly accounts of within-school inequalities, shedding light on policy strategies to improve the educational experiences of Black students in racially integrated settings.
Keywords: Black-White inequalities; agent-based modeling; advanced course-taking; school organization; racial composition.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (8 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
Hoarding without hoarders: unpacking the emergence of opportunity hoarding within schools
Sociologists of education increasingly highlight the role of opportunity hoarding in the formation of Black-White educational inequalities. Informed by this literature, this article unpacks the necessary and sufficient conditions under which the hoarding of educational resources emerges within schools. It develops a qualitatively informed agent-based model which captures Black and White students' competition for a valuable school resource: advanced coursework. In contrast to traditional accounts -- which explain the emergence of hoarding through the actions of Whites who keep valuable resources within White communities -- simulations, perhaps surprisingly, show hoarding to arise even when Whites do not play the role of hoarders of resources. Behind this result is the fact that a structural inequality (i.e., racial differences in social class) -- and not action-driven hoarding -- is the necessary condition for hoarding to emerge. Findings, therefore, illustrate that common action-driven understandings of opportunity hoarding can overlook the structural foundations behind this important phenomenon. Policy implications are discussed.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (5 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
What is wrong in an Artificial intelligence factory - The Tech Trend
If you follow the news on artificial intelligence, you will discover two diverging threads. The press and cinema often portray AI with human-like capacities, mass unemployment, and even a potential robot apocalypse. Scientific conferences, on the other hand, discuss progress toward artificial general intelligence while acknowledging that present-day AI is weak and incapable of many of the fundamental functions of the human mind. But no matter where they stand relative to human intellect, today's AI algorithms have already become a defining element of several industries, including healthcare, finance, manufacturing, transportation, and many more. And quite soon "no field of human endeavor will stay independent of artificial intelligence," as Harvard Business School professors Marco Iansiti and Karim Lakhani write in their book Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World.
- Health & Medicine (0.69)
- Banking & Finance > Economy (0.35)
Limits of Transfer Learning
Williams, Jake, Tadesse, Abel, Sam, Tyler, Sun, Huey, Montanez, George D.
Transfer learning involves taking information and insight from one problem domain and applying it to a new problem domain. Although widely used in practice, theory for transfer learning remains less well-developed. To address this, we prove several novel results related to transfer learning, showing the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems. Furthermore, we prove how the degree of probabilistic change in an algorithm using transfer learning places an upper bound on the amount of improvement possible. These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.
- North America > United States > California > Los Angeles County > Claremont (0.04)
- Asia (0.04)
Decomposable Probability-of-Success Metrics in Algorithmic Search
Sam, Tyler, Williams, Jake, Tadesse, Abel, Sun, Huey, Montanez, George
There are three components to a search problem. The first is the finite discrete search space, Ω, which is the set of elements to be examined. Next is the target set, T, which is a nonempty subset of the search space that we are trying to find. Finally, we have an external information resource, F, which provides an evaluation of elements of the search space. Typically, there is a tight relationship between the target set and the external information resource, as the resource is expected to lead to or describe the target set in some way, such as the target set being elements which meet a certain threshold under the external information resource. Within the framework, we have an iterative algorithm which seeks to find elements of the target set, shown in Figure 1. The algorithm is a black-box that has access to a search history and produces a probability distribution over the search space. At each step, the algorithm samples over the search space using the probability distribution, evaluates that element using the information resource, adds the result to the search history, and determines the next probability distribution. The abstraction of finding the next probability distribution as a black-box algorithm allows the search framework to work with all types of search problems.
- Research Report (0.64)
- Workflow (0.48)
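The iterative loop described in the abstract above -- a black-box algorithm that maps a search history to a distribution, samples, evaluates, and repeats -- can be sketched in a few lines. This is a minimal illustration of the framework, not the authors' implementation; the toy problem and all function names are made up here.

```python
import random

random.seed(0)  # for reproducibility of the toy run below

def run_search(search_space, target_set, info_resource, next_distribution, steps=100):
    """Minimal sketch of the iterative search framework.

    search_space      -- finite list of elements (Omega)
    target_set        -- subset of the search space we want to find (T)
    info_resource     -- function evaluating an element (F)
    next_distribution -- black-box algorithm mapping the search history
                         to sampling weights over the search space
    """
    history = []
    for _ in range(steps):
        # The black-box algorithm produces a distribution from the history.
        weights = next_distribution(history, search_space)
        # Sample an element of the search space from that distribution.
        element = random.choices(search_space, weights=weights, k=1)[0]
        # Evaluate it with the external information resource; record the result.
        history.append((element, info_resource(element)))
        if element in target_set:
            return element, history  # found a target element
    return None, history

# Toy instance: find an integer whose square exceeds 50, with the squares
# serving as the external information resource.
omega = list(range(10))
target = {x for x in omega if x * x > 50}
uniform = lambda history, space: [1.0] * len(space)  # uniform search strategy
found, hist = run_search(omega, target, lambda x: x * x, uniform)
```

Because `next_distribution` is an arbitrary black box, swapping in a history-aware strategy (e.g., one that upweights neighbors of high-scoring past samples) requires changing only that one argument.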
The Labeling Distribution Matrix (LDM): A Tool for Estimating Machine Learning Algorithm Capacity
Segura, Pedro Sandoval, Lauw, Julius, Bashir, Daniel, Shah, Kinjal, Sehra, Sonia, Macias, Dominique, Montanez, George
Keywords: Machine Learning, Model Complexity, Algorithm Capacity, VC Dimension, Label Autoencoder
Abstract: Algorithm performance in supervised learning is a combination of memorization, generalization, and luck. By estimating how much information an algorithm can memorize from a dataset, we can set a lower bound on the amount of performance due to other factors such as generalization and luck. With this goal in mind, we introduce the Labeling Distribution Matrix (LDM) as a tool for estimating the capacity of learning algorithms. The method attempts to characterize the diversity of possible outputs by an algorithm for different training datasets, using this to measure algorithm flexibility and responsiveness to data. We test the method on several supervised learning algorithms, and find that while the results are not conclusive, the LDM does allow us to gain potentially valuable insight into the prediction behavior of algorithms. We also introduce the Label Autoencoder as an additional tool for estimating algorithm capacity, with more promising initial results.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > California > Los Angeles County > Claremont (0.04)
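The core idea of the LDM abstract above -- one row per training dataset, each row a distribution over labelings of some evaluation points, with row diversity reflecting algorithm flexibility -- can be illustrated with a deliberately simplified sketch. The learner, datasets, and evaluation points below are hypothetical; the paper's actual construction is more general.

```python
from collections import Counter
from itertools import product

def labeling_distribution_row(learner, train_set, eval_points, labels):
    """One row of an LDM-style matrix: the distribution the trained model
    induces over labelings of eval_points. For a deterministic learner all
    probability mass lands on a single labeling."""
    model = learner(train_set)
    predicted = tuple(model(x) for x in eval_points)
    row = {labeling: 0.0 for labeling in product(labels, repeat=len(eval_points))}
    row[predicted] = 1.0
    return row

def majority_learner(train_set):
    """Hypothetical low-capacity learner: always predict the majority
    training label, ignoring the input entirely."""
    majority = Counter(y for _, y in train_set).most_common(1)[0][0]
    return lambda x: majority

datasets = [
    [(0, 'a'), (1, 'a'), (2, 'b')],
    [(0, 'b'), (1, 'b'), (2, 'a')],
]
rows = [labeling_distribution_row(majority_learner, d, [3, 4], ['a', 'b'])
        for d in datasets]
# A low-capacity learner like this one can only ever reach the constant
# labelings; a more flexible algorithm would spread across more distinct
# rows, and that variation is what the LDM uses to estimate capacity.
```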
The Bias-Expressivity Trade-off
Lauw, Julius, Macias, Dominique, Trikha, Akshay, Vendemiatti, Julia, Montanez, George D.
Learning algorithms need bias to generalize and perform better than random guessing. We examine the flexibility (expressivity) of biased algorithms. An expressive algorithm can adapt to changing training data, altering its outcome based on changes in its input. We measure expressivity using an information-theoretic notion of entropy on algorithm outcome distributions, demonstrating a trade-off between bias and expressivity. The degree to which an algorithm is biased is the degree to which it can outperform uniform random sampling, but it is also the degree to which the algorithm becomes inflexible. We derive bounds relating bias to expressivity, proving the necessary trade-offs inherent in trying to create strongly performing yet flexible algorithms.
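The entropy measure of expressivity mentioned in the abstract can be made concrete with a small calculation (the example distributions are invented for illustration): a fully biased algorithm concentrates all outcome mass on one point and has zero entropy, while an unbiased one spreads mass uniformly and has maximal entropy.

```python
import math

def entropy(distribution):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in distribution if p > 0)

# Maximal bias: all outcome mass on one element -> zero expressivity.
biased = [1.0, 0.0, 0.0, 0.0]
# No bias: uniform outcomes -> maximal entropy, log2(4) = 2 bits.
uniform = [0.25, 0.25, 0.25, 0.25]
# Partial bias sits strictly between the two extremes.
partial = [0.5, 0.25, 0.125, 0.125]

print(entropy(biased))   # 0.0
print(entropy(uniform))  # 2.0
```

The trade-off is visible directly: moving mass toward a favored outcome (more bias) can only decrease this entropy, i.e., reduce expressivity.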
The Futility of Bias-Free Learning and Search
Montanez, George D., Hayase, Jonathan, Lauw, Julius, Macias, Dominique, Trikha, Akshay, Vendemiatti, Julia
Building on the view of machine learning as search, we demonstrate the necessity of bias in learning, quantifying the role of bias (measured relative to a collection of possible datasets, or more generally, information resources) in increasing the probability of success. For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above. Furthermore, we demonstrate that bias is a conserved quantity, such that no algorithm can be favorably biased towards many distinct targets simultaneously. Thus bias encodes trade-offs. The probability of success for a task can also be measured geometrically, as the angle of agreement between what holds for the actual task and what is assumed by the algorithm, represented in its bias. Lastly, finding a favorably biasing distribution over a fixed set of information resources is provably difficult, unless the set of resources itself is already favorable with respect to the given task and algorithm.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Los Angeles County > Claremont (0.04)
Towards a New Data Modelling Architecture - Part 2
How do we design a data model? How do we connect data, represent information, and store or retrieve it? These are all fundamental questions in data modeling, but there is a common key to unlocking them. You have to start by defining a primitive information resource, and then understand how complex information structures can be built on top of these fundamental units. This is because everything in nature and in systems follows this kind of abstraction, from the simple to the most sophisticated. There are patterns that recur at progressively smaller scales, and fundamental building blocks that can form higher-order structures.
The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm
Casting machine learning as a type of search, we demonstrate that the proportion of problems that are favorable for a fixed algorithm is strictly bounded, such that no single algorithm can perform well over a large fraction of them. Our results explain why we must either continue to develop new learning methods year after year or move towards highly parameterized models that are both flexible and sensitive to their hyperparameters. We further give an upper bound on the expected performance for a search algorithm as a function of the mutual information between the target and the information resource (e.g., training dataset), proving the importance of certain types of dependence for machine learning. Lastly, we show that the expected per-query probability of success for an algorithm is mathematically equivalent to a single-query probability of success under a distribution (called a search strategy), and prove that the proportion of favorable strategies is also strictly bounded. Thus, whether one holds fixed the search algorithm and considers all possible problems or one fixes the search problem and looks at all possible search strategies, favorable matches are exceedingly rare. The forte (strength) of any algorithm is quantifiably restricted.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > New Jersey > Middlesex County > New Brunswick (0.04)
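The equivalence claimed in the abstract above -- that the expected per-query probability of success equals the single-query probability of success under an averaged distribution (a "search strategy") -- follows from linearity of expectation, and can be checked with a toy calculation. The numbers below are made up for illustration.

```python
# Three per-query distributions over a five-element search space,
# as an algorithm might produce over three steps of a search.
per_query = [
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0.5, 0.1, 0.1, 0.2, 0.1],
    [0.1, 0.6, 0.1, 0.1, 0.1],
]
target = {0}  # target set: the first element

# Expected per-query probability of success: average, over queries,
# of the probability mass each distribution places on the target set.
expected_per_query = sum(
    sum(dist[i] for i in target) for dist in per_query
) / len(per_query)

# The equivalent single search strategy: the averaged distribution.
strategy = [sum(col) / len(per_query) for col in zip(*per_query)]
single_query = sum(strategy[i] for i in target)

# By linearity of expectation the two quantities coincide.
assert abs(expected_per_query - single_query) < 1e-12
```

This is why the paper can bound favorable problems either way around: fixing the algorithm and varying problems, or fixing the problem and varying strategies, amounts to the same per-query success measure.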